spiking neuron
Simplified Rules and Theoretical Analysis for Information Bottleneck Optimization and PCA with Spiking Neurons
We show that under suitable assumptions (primarily linearization) a simple and perspicuous online learning rule for Information Bottleneck optimization with spiking neurons can be derived. This rule performs on common benchmark tasks as well as a rather complex rule that has previously been proposed \cite{KlampflETAL:07b}. Furthermore, the transparency of this new learning rule makes a theoretical analysis of its convergence properties feasible. A variation of this learning rule (with sign changes) provides a theoretically founded method for performing Principal Component Analysis (PCA) with spiking neurons. By applying this rule to an ensemble of neurons, different principal components of the input can be extracted. In addition, it is possible to preferentially extract those principal components from incoming signals X that are related, or not related, to some additional target signal Y_T.
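For intuition, here is a minimal sketch of the classical linear counterpart of such a rule: Oja's online PCA rule, which a rule of this kind reduces to under the linearization assumption. The toy covariance matrix, learning rate, and step count are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy input: zero-mean 2-D signals with leading principal component (1, 1).
C = np.array([[3.0, 2.0],
              [2.0, 3.0]])                 # input covariance (illustrative)
L = np.linalg.cholesky(C)

w = 0.1 * rng.standard_normal(2)           # synaptic weight vector
eta = 1e-3                                 # learning rate
for _ in range(20000):
    x = L @ rng.standard_normal(2)         # one input sample
    y = w @ x                              # linear rate approximation of output
    w += eta * y * (x - y * w)             # Oja's rule: Hebbian term + decay

# The learned direction matches the leading eigenvector of C up to sign.
print("learned direction: ", w / np.linalg.norm(w))
print("leading eigenvector:", np.linalg.eigh(C)[1][:, -1])
```

In the linear setting, sign changes of such Hebbian updates are what switch the rule between maximizing and minimizing the chosen objective, which is the spirit of the variation mentioned above.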
Learning Exact Patterns of Quasi-synchronization among Spiking Neurons from Data on Multi-unit Recordings
This paper develops arguments for a family of temporal log-linear models to represent spatio-temporal correlations among the spiking events in a group of neurons. The models can represent not just pairwise correlations but also correlations of higher order. Methods are discussed for inferring the existence or absence of correlations and estimating their strength. A frequentist and a Bayesian approach to correlation detection are compared. The frequentist method is based on the G^2 statistic, with estimates obtained via the Max-Ent principle.
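As an illustration of the frequentist side, the sketch below computes the G^2 (likelihood-ratio) statistic for excess synchrony between two spike trains, testing the observed 2x2 table of joint spike/no-spike counts against the independence model. The spike trains, rates, and injected synchrony are synthetic stand-ins, not data from the paper.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)

# Synthetic binary spike indicators for two neurons over discretized bins.
n_bins = 5000
a = rng.random(n_bins) < 0.10
b = rng.random(n_bins) < 0.10
b = b | (a & (rng.random(n_bins) < 0.30))   # inject excess synchrony

# Observed 2x2 contingency table of joint states (spike / no spike).
obs = np.array([[np.sum(a & b),  np.sum(a & ~b)],
                [np.sum(~a & b), np.sum(~a & ~b)]], dtype=float)
# Expected counts under independence of the two neurons.
exp = np.outer(obs.sum(axis=1), obs.sum(axis=0)) / obs.sum()

g2 = 2.0 * np.sum(obs * np.log(obs / exp))  # likelihood-ratio statistic
p_value = chi2.sf(g2, df=1)                 # asymptotic chi-square, 1 dof
print(f"G^2 = {g2:.2f}, p = {p_value:.3g}")
```

Higher-order correlations are tested the same way, with larger tables and expected counts fitted under a maximum-entropy model that matches the lower-order marginals.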
Distributed Synchrony of Spiking Neurons in a Hebbian Cell Assembly
We investigate the behavior of a Hebbian cell assembly of spiking neurons formed via a temporal synaptic learning curve. This learning function is based on recent experimental findings. It includes potentiation for short time delays between pre- and post-synaptic neuronal spiking, and depression for spiking events occurring in the reverse order. The coupling between the dynamics of the synaptic learning and of the neuronal activation leads to interesting results. We find that the cell assembly can fire asynchronously, but may also function in complete synchrony, or in distributed synchrony. The latter implies spontaneous division of the Hebbian cell assembly into groups of cells that fire in a cyclic manner.
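The learning curve described here has the shape of the standard asymmetric STDP window. A minimal sketch with illustrative amplitudes and time constants, not the fitted values of the paper:

```python
import numpy as np

def stdp_window(dt_ms, a_plus=0.010, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0):
    """Weight change for a spike pair separated by dt_ms = t_post - t_pre.

    Potentiation when the presynaptic spike precedes the postsynaptic one
    (dt_ms > 0), depression for pairs occurring in the reverse order.
    """
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau_plus)
    return -a_minus * np.exp(dt_ms / tau_minus)

for dt in (-40, -10, 10, 40):
    print(f"dt = {dt:+d} ms  ->  dw = {stdp_window(dt):+.5f}")
```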
The Doubly Balanced Network of Spiking Neurons: A Memory Model with High Capacity
A balanced network leads to contradictory constraints on memory models, as exemplified in previous work on accommodation of synfire chains. Here we show that these constraints can be overcome by introducing a 'shadow' inhibitory pattern for each excitatory pattern of the model. This is interpreted as a double-balance principle, whereby there exists both global balance between average excitatory and inhibitory currents and local balance between the currents carrying coherent activity at any given time frame. This principle can be applied to networks with Hebbian cell assemblies, leading to a high capacity of the associative memory. The number of possible patterns is limited by a combinatorial constraint that turns out to be P = 0.06N within the specific model that we employ.
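A minimal sketch of the pattern construction this principle suggests, with illustrative population sizes and coding level: each excitatory assembly is paired with a randomly drawn inhibitory 'shadow' at the same coding level, so that the coherent current of an active pattern can be cancelled locally by suitably scaled inhibition.

```python
import numpy as np

rng = np.random.default_rng(2)

N_E, N_I = 8000, 2000   # excitatory / inhibitory population sizes (illustrative)
P, f = 50, 0.05         # number of stored patterns and coding level

# One excitatory cell assembly plus one inhibitory 'shadow' per memory.
exc_patterns = rng.random((P, N_E)) < f
inh_shadows = rng.random((P, N_I)) < f

# Local balance in expectation: the active fractions match, so the
# pattern-locked excitatory and inhibitory currents can cancel.
print("active E fraction:", exc_patterns.mean())
print("active I fraction:", inh_shadows.mean())
```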
Information Dynamics and Emergent Computation in Recurrent Circuits of Spiking Neurons
We employ an efficient method using Bayesian and linear classifiers for analyzing the dynamics of information in high-dimensional states of generic cortical microcircuit models. It is shown that such recurrent circuits of spiking neurons have an inherent capability to carry out rapid computations on complex spike patterns, merging information contained in the order of spike arrival with previously acquired context information.
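A toy version of such a readout analysis, with synthetic stand-in vectors in place of recorded circuit states and a regularized least-squares readout standing in for the Bayesian and linear classifiers used in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for circuit states: one 200-D filtered-response vector per trial,
# with the class of the input spike pattern weakly embedded in the state.
n_trials, dim = 400, 200
labels = rng.integers(0, 2, n_trials)
states = rng.standard_normal((n_trials, dim)) + 0.3 * labels[:, None]

# Regularized least-squares linear readout, trained on the first 300 trials.
train, test = slice(0, 300), slice(300, None)
X, y = states[train], 2.0 * labels[train] - 1.0
w = np.linalg.solve(X.T @ X + 1e-2 * np.eye(dim), X.T @ y)

pred = (states[test] @ w > 0).astype(int)
print("readout accuracy:", (pred == labels[test]).mean())
```

Training the same kind of readout at successive time points after stimulus onset traces how long information about each input feature remains recoverable from the circuit state.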
Hierarchical Bayesian Inference in Networks of Spiking Neurons
There is growing evidence from psychophysical and neurophysiological studies that the brain utilizes Bayesian principles for inference and decision making. An important open question is how Bayesian inference for arbitrary graphical models can be implemented in networks of spiking neurons. In this paper, we show that recurrent networks of noisy integrate-and-fire neurons can perform approximate Bayesian inference for dynamic and hierarchical graphical models. The membrane potential dynamics of neurons is used to implement belief propagation in the log domain. The spiking probability of a neuron is shown to approximate the posterior probability of the preferred state encoded by the neuron, given past inputs.
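The log-domain computation being approximated is standard Bayesian filtering. The sketch below runs it for a two-state hidden Markov model with illustrative transition and emission probabilities, printing the posterior of state 0, i.e., the quantity a neuron preferring state 0 would report through its spiking probability.

```python
import numpy as np
from scipy.special import logsumexp

# Two-state HMM; all probabilities are illustrative.
logT = np.log(np.array([[0.95, 0.05],     # state transition matrix
                        [0.05, 0.95]]))
logE = np.log(np.array([[0.80, 0.20],     # P(observation | state)
                        [0.20, 0.80]]))
log_b = np.log(np.array([0.5, 0.5]))      # initial belief

for obs in (0, 0, 1, 1, 1, 0):
    # Prediction: sum over previous states, carried out in the log domain.
    log_b = logsumexp(log_b[:, None] + logT, axis=0)
    # Measurement update and renormalization.
    log_b = log_b + logE[:, obs]
    log_b = log_b - logsumexp(log_b)
    print(f"obs={obs}  P(state 0 | observations so far) = {np.exp(log_b[0]):.3f}")
```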
Information Bottleneck Optimization and Independent Component Extraction with Spiking Neurons
The extraction of statistically independent components from high-dimensional multi-sensory input streams is assumed to be an essential component of sensory processing in the brain. Such independent component analysis (or blind source separation) could provide a less redundant representation of information about the external world. Another powerful processing strategy is to extract preferentially those components from high-dimensional input streams that are related to other information sources, such as internal predictions or proprioceptive feedback. This strategy allows the optimization of internal representation according to the information bottleneck method. However, concrete learning rules that implement these general unsupervised learning principles for spiking neurons are still missing.
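For reference, the classical Information Bottleneck functional (Tishby, Pereira, and Bialek) that such learning rules would have to optimize, written in the notation of this abstract (input X, neuronal output Y, additional information source Y_T):

\min_{p(y \mid x)} \; I(X;Y) \;-\; \beta \, I(Y;Y_T)

The trade-off parameter \beta controls how strongly relevance to Y_T is weighted against compression of X.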
Temporal Coding using the Response Properties of Spiking Neurons
In biological neurons, the timing of a spike depends on the timing of synaptic currents, in a way that is classically described by the Phase Response Curve. This has implications for temporal coding: an action potential that arrives on a synapse has an implicit meaning that depends on the position of the postsynaptic neuron on the firing cycle. Here we show that this implicit code can be used to perform computations. Using theta neurons, we derive a spike-timing dependent learning rule from an error criterion. We demonstrate how to train an auto-encoder neural network using this rule.
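A minimal simulation sketch of the theta neuron, the canonical type I model used here, with an illustrative constant drive; the derived learning rule itself is not reproduced:

```python
import numpy as np

# Theta neuron:  dtheta/dt = (1 - cos theta) + I * (1 + cos theta).
# The neuron spikes when the phase theta crosses pi.
dt, T = 1e-3, 10.0          # Euler step and simulated duration (illustrative)
theta, I = -0.5, 5.0        # initial phase and constant suprathreshold drive
spike_times = []
for step in range(int(T / dt)):
    theta += dt * ((1.0 - np.cos(theta)) + I * (1.0 + np.cos(theta)))
    if theta >= np.pi:      # spike: record time and wrap the phase
        spike_times.append(step * dt)
        theta -= 2.0 * np.pi

print("spike times:", np.round(spike_times, 3))
# For constant I > 0 the firing period is pi / sqrt(I), here ~1.405.
```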
Contraction Properties of VLSI Cooperative Competitive Neural Networks of Spiking Neurons
A non-linear dynamic system is called contracting if initial conditions are forgotten exponentially fast, so that all trajectories converge to a single trajectory. We use contraction theory to derive an upper bound for the strength of recurrent connections that guarantees contraction for complex neural networks. Specifically, we apply this theory to a special class of recurrent networks, often called Cooperative Competitive Networks (CCNs), which are an abstract representation of the cooperative-competitive connectivity observed in cortex. This specific type of network is believed to play a major role in shaping cortical responses and selecting the relevant signal among distractors and noise. In this paper, we analyze contraction of combined CCNs of linear threshold units and verify the results of our analysis in a hybrid analog/digital VLSI CCN comprising spiking neurons and dynamic synapses.
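A rough numerical check in the spirit of this analysis, under the simplifying assumption that the worst case is the activation pattern with all threshold units active, so that the condition reduces to the symmetric part of the weight matrix; the network size and weight scale are illustrative, and this is not the paper's exact bound:

```python
import numpy as np

rng = np.random.default_rng(4)

# Linear-threshold CCN:  tau * dx/dt = -x + W @ relu(x) + b.
# Its Jacobian is (-I + W @ D) / tau, with D a 0/1 diagonal matrix marking
# the active units.  With all units active (D = I), negative definiteness
# of the Jacobian's symmetric part reduces to  lambda_max((W + W.T)/2) < 1.
N = 6
W = 0.2 * rng.standard_normal((N, N))      # recurrent weights (illustrative)

lam_max = np.linalg.eigvalsh((W + W.T) / 2.0)[-1]
verdict = "contracting" if lam_max < 1.0 else "bound violated"
print(f"lambda_max((W + W^T)/2) = {lam_max:.3f} -> {verdict}")
```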